Graph trend filtering guided noise tolerant multi-label learning model
LIN Tengtao, ZHA Siming, CHEN Lei, LONG Xianzhong
Journal of Computer Applications    2021, 41 (1): 8-14.   DOI: 10.11772/j.issn.1001-9081.2020060971
Focusing on the problem that feature noise and label noise often appear simultaneously in multi-label learning, a Graph trend filtering guided Noise Tolerant Multi-label Learning (GNTML) model was proposed. In the proposed model, feature noise and label noise were tolerated at the same time by a group sparsity constraint bridged with label enrichment. The key of the model was learning the label enhancement matrix. In order to learn a reasonable label enhancement matrix in a mixed-noise environment, the following steps were carried out. Firstly, the Graph Trend Filtering (GTF) mechanism was introduced to tolerate the inconsistency between noisy example features and labels, so as to reduce the influence of feature noise on the learning of the enhancement matrix. Then, a group-sparsity-constrained label fidelity penalty was introduced to reduce the impact of label noise on the label enhancement matrix learning. At the same time, a sparsity constraint on the label correlation matrix was introduced to characterize the local correlation between labels, so that example labels could propagate better between similar examples. Finally, experiments were conducted on seven real multi-label datasets with five different evaluation criteria. Experimental results show that the proposed model achieves the best or second-best value in 66.67% of cases, outperforming the other five multi-label learning algorithms, and can effectively improve the robustness of multi-label learning.
Evaluation metrics of outlier detection algorithms
NING Jin, CHEN Leiting, LUO Zijuan, ZHOU Chuan, ZENG Huiru
Journal of Computer Applications    2020, 40 (9): 2622-2627.   DOI: 10.11772/j.issn.1001-9081.2020010126
With the in-depth research and extensive application of outlier detection technology, more and more excellent algorithms have been proposed. However, existing outlier detection algorithms are still evaluated with the metrics of traditional classification, which makes the evaluation one-sided and poorly adapted to the task. To solve these problems, the first type of metric, High True positive rate-Area Under Curve (HT_AUC), and the second type, Low False positive rate-Area Under Curve (LF_AUC), were proposed. First, the commonly used outlier detection evaluation metrics were analyzed to illustrate their advantages, disadvantages and applicable scenarios. Then, based on the existing Area Under Curve (AUC) method, HT_AUC and LF_AUC were proposed for the high True Positive Rate (TPR) demand and the low False Positive Rate (FPR) demand respectively, so as to provide more suitable metrics for performance evaluation as well as for the quantization and integration of outlier detection algorithms. Experimental results on real-world datasets show that the proposed methods satisfy the first type of demand (high true positive rate) and the second type (low false positive rate) better than the traditional evaluation metrics.
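The abstract does not give the exact definitions of HT_AUC and LF_AUC; a minimal sketch of a partial-AUC metric in the same spirit, restricted to the low-FPR region that LF_AUC targets and normalized to [0, 1], might look as follows (the function name, the cutoff `fpr_max`, and the normalization are assumptions, not the paper's formulas):

```python
import numpy as np

def partial_auc_low_fpr(labels, scores, fpr_max=0.1):
    """Partial AUC over the low-FPR region [0, fpr_max], normalized to [0, 1].

    A plausible realization of an LF_AUC-style metric: it rewards detectors
    that keep the false positive rate low instead of averaging over the
    whole ROC curve.
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    # Sort by score descending and sweep the decision threshold.
    order = np.argsort(-scores)
    labels = labels[order]
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    tpr = np.cumsum(labels) / n_pos
    fpr = np.cumsum(1 - labels) / n_neg
    # Prepend the (0, 0) ROC point and clip the sweep at fpr_max.
    fpr = np.concatenate(([0.0], fpr))
    tpr = np.concatenate(([0.0], tpr))
    mask = fpr <= fpr_max
    fpr_c = np.concatenate((fpr[mask], [fpr_max]))
    tpr_c = np.concatenate((tpr[mask], [tpr[mask][-1]]))
    # Trapezoidal area, normalized by the maximum attainable area fpr_max.
    return np.trapz(tpr_c, fpr_c) / fpr_max
```

A perfect detector scores 1.0 under this metric even with a tight `fpr_max`, while a detector that ranks all negatives above the positives scores 0.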
Person re-identification algorithm based on low-pass filter model
HUA Chao, WANG Gengrun, CHEN Lei
Journal of Computer Applications    2020, 40 (11): 3314-3319.   DOI: 10.11772/j.issn.1001-9081.2020030351
Because occlusion and background interference introduce a large number of useless features into person re-identification images, a person re-identification method based on a low-pass filtering model was proposed. First, the person images were divided into blocks. Then, for each image, the number of similar blocks was counted: blocks with a high similarity count were marked as high-frequency noise features, while blocks with a low similarity count were treated as beneficial features. Finally, unlike the low-pass filter of common image processing, which removes abrupt features and keeps smooth ones, the low-pass filter of a communication system was used to suppress the high-frequency noise features and retain the beneficial features. Experimental results show that the identification rate of the proposed method on the ETHZ dataset is nearly 20% higher than that of the classic Symmetry-Driven Accumulation of Local Features (SDALF) method, while achieving comparable results on the VIPeR (Viewpoint Invariant Pedestrian Recognition) and I-LIDS (Imagery Library for Intelligent Detection Systems) datasets.
Intelligent trigger mechanism for model aggregation and disaggregation
NING Jin, CHEN Leiting, ZHOU Chuan, ZHANG Lei
Journal of Computer Applications    2019, 39 (6): 1614-1618.   DOI: 10.11772/j.issn.1001-9081.2018112281
Aiming at the heavy manual dependence and overly frequent triggering of existing model Aggregation and Disaggregation (AD) mechanisms, an intelligent trigger mechanism based on a focus-area multi-entity temporal outlier detection algorithm was proposed. Firstly, the focus areas were divided based on attention neighbors. Secondly, the outlier score of a focus area was obtained by calculating the k-distance outlier scores of the entities in it. Finally, a trigger mechanism for AD was constructed based on a strongest-focus-area threshold decision method. Experimental results on a real dataset show that, compared with traditional single-entity temporal outlier detection algorithms, the proposed algorithm improves Precision, Recall and F1-score by more than 10 percentage points. The proposed algorithm can not only judge the trigger time of the AD operation in time, but also enable the simulation system to intelligently detect simulation entities in emergency situations, meeting the requirements of multi-resolution modeling.
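As a rough illustration of the k-distance step, the sketch below scores each entity by its distance to its k-th nearest neighbour and aggregates per focus area; taking the maximum as the area score is an assumption for illustration, not necessarily the paper's aggregation rule:

```python
import numpy as np

def k_distance_scores(points, k=2):
    """Outlier score of each entity: its Euclidean distance to the k-th
    nearest neighbour (the classic k-distance measure)."""
    pts = np.asarray(points, dtype=float)
    # Pairwise Euclidean distance matrix.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    # k-th smallest distance; column 0 after sorting is the zero self-distance.
    return np.sort(d, axis=1)[:, k]

def focus_area_score(points, k=2):
    """Score of a focus area as the maximum entity k-distance
    (an illustrative aggregation; the paper may combine scores differently)."""
    return float(k_distance_scores(points, k).max())
```

An entity far from the cluster of its focus area gets a large k-distance, pushing the area score over the trigger threshold.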
Salient object detection algorithm based on multi-task deep convolutional neural network
YANG Fan, LI Jianping, LI Xin, CHEN Leiting
Journal of Computer Applications    2018, 38 (1): 91-96.   DOI: 10.11772/j.issn.1001-9081.2017061633
Current deep learning-based salient object detection algorithms fail to produce accurate object boundaries, which leaves the regions along object contours blurred and inaccurate. To solve this problem, a salient object detection algorithm based on a multi-task deep learning model was proposed. Firstly, based on a deep Convolutional Neural Network (CNN), a multi-task model was used to separately learn the region and boundary features of a salient object. Secondly, the detected object boundaries were utilized to produce a number of region candidates. After that, the region candidates were re-ranked and their weights computed by combining the results of salient region detection. Finally, the entire saliency map was extracted. Experimental results on three widely-used benchmarks show that the proposed method achieves better accuracy: in terms of F-measure, it outperforms the compared deep learning-based algorithm by 1.9% on average, while lowering the Mean Absolute Error (MAE) by 12.6%.
Speech enhancement algorithm based on improved variable-step LMS algorithm in cochlear implant
XU Wenchao, WANG Guangyan, CHEN Lei
Journal of Computer Applications    2017, 37 (4): 1212-1216.   DOI: 10.11772/j.issn.1001-9081.2017.04.1212
In order to improve the quality of speech signals and the adaptability of cochlear implants under strong background noise, an improved method combining spectral subtraction with a variable-step Least Mean Square (LMS) adaptive filtering algorithm was proposed, and a speech enhancement hardware system for cochlear implants was built around it. To address the slow convergence and large steady-state error of the conventional algorithm, the squared output error was used to adjust the step size of the variable-step LMS adaptive filter; in addition, fixed and variable step-size stages were combined, which improved both the adaptability of the algorithm and the quality of the speech signal. The hardware system was composed of a TMS320VC5416 DSP and a TLV320AIC23B audio codec chip; high-speed acquisition and real-time processing of voice data between them were realized through the Multi-channel Buffered Serial Port (McBSP) and Serial Peripheral Interface (SPI). Matlab simulation and test results show that the proposed method performs well in noise elimination: the Signal-to-Noise Ratio (SNR) can be increased by about 10 dB at low input SNR, the Perceptual Evaluation of Speech Quality (PESQ) score is also greatly improved, and the quality of the speech signal is effectively enhanced. The system runs stably and further improves the clarity and intelligibility of speech in cochlear implants.
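The step-size rule hinted at in the abstract (squared output error controlling the step) is commonly written as mu(n) = beta * (1 - exp(-alpha * e(n)^2)). A hedged numpy sketch of a variable-step LMS filter using that rule (alpha, beta and the floor mu_min are illustrative values, not the paper's):

```python
import numpy as np

def vs_lms(x, d, order=8, alpha=4.0, beta=0.05, mu_min=1e-4):
    """Variable-step LMS adaptive filter: the step size grows with the
    squared output error, mu(n) = beta * (1 - exp(-alpha * e(n)**2)),
    one common rule consistent with the idea in the abstract."""
    n = len(x)
    w = np.zeros(order)
    e = np.zeros(n)
    for i in range(order - 1, n):
        u = x[i - order + 1:i + 1][::-1]      # newest input sample first
        e[i] = d[i] - w @ u                   # output error
        mu = max(beta * (1.0 - np.exp(-alpha * e[i] ** 2)), mu_min)
        w += 2.0 * mu * e[i] * u              # LMS weight update
    return w, e
```

Fed a white input and a desired response generated by a short FIR filter, the weight vector converges to that filter's taps while the error decays, with a large step early on and a small step near convergence.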
MRI image registration based on adaptive tangent space
LIU Wei, CHEN Leiting
Journal of Computer Applications    2017, 37 (4): 1193-1197.   DOI: 10.11772/j.issn.1001-9081.2017.04.1193
A diffeomorphism is a smooth, invertible differential transformation, which leads to topology preservation between anatomic individuals while avoiding physically implausible phenomena during MRI image registration. In order to yield a more plausible diffeomorphism for spatial transformation, the nonlinear structure of high-dimensional data was considered, and an MRI image registration method using manifold learning based on adaptive tangent space was put forward. Firstly, Symmetric Positive Definite (SPD) covariance matrices were constructed from the voxels of an MRI image, forming a Lie group manifold. Secondly, the tangent space on the Lie group was used to locally approximate the nonlinear structure of the manifold. Thirdly, the local linear approximation was adaptively optimized by selecting appropriate neighborhoods for each sample voxel, so that the linearization degree of the tangent space was improved, the local nonlinear structure of the manifold was well preserved, and an optimal diffeomorphism could be obtained. Numerical comparative experiments were conducted on both synthetic and clinical data. Experimental results show that, compared with the existing algorithm, the proposed algorithm achieves a higher degree of topology preservation on a dense high-dimensional deformation field and improves registration accuracy.
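The tangent-space linearization of the SPD manifold can be sketched with the matrix logarithm and exponential computed by eigendecomposition; this shows only the log-Euclidean map the abstract relies on, not the paper's adaptive neighborhood selection:

```python
import numpy as np

def spd_log(a):
    """Matrix logarithm of an SPD matrix via eigendecomposition: maps a
    point on the SPD manifold into the tangent space at the identity
    (the log-Euclidean view)."""
    w, v = np.linalg.eigh(a)
    return (v * np.log(w)) @ v.T   # V diag(log w) V^T

def spd_exp(t):
    """Inverse map: matrix exponential of a symmetric tangent vector,
    returning a point back on the SPD manifold."""
    w, v = np.linalg.eigh(t)
    return (v * np.exp(w)) @ v.T
```

In the tangent space, SPD matrices can be compared and averaged with ordinary Euclidean operations, which is what makes the local linear approximation of the manifold possible.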
Advances in automatic image annotation
LIU Mengdi, CHEN Yanli, CHEN Lei
Journal of Computer Applications    2016, 36 (8): 2274-2281.   DOI: 10.11772/j.issn.1001-9081.2016.08.2274
Existing image annotation algorithms can be roughly divided into four categories: semantics-based methods, probability-based methods, matrix decomposition-based methods and graph learning-based methods. Representative algorithms of each category were introduced, and the problem models and characteristics of these algorithms were analyzed. Then the main optimization methods of these algorithms were summarized, and the common image datasets and evaluation metrics were introduced. Finally, the main open problems of automatic image annotation were pointed out, together with possible solutions. The analysis shows that making full use of the complementary advantages of current algorithms, or drawing on other disciplines, may lead to more efficient automatic image annotation algorithms.
Correction technique for color difference of multi-sensor texture
MA Qian, GE Baozhen, CHEN Lei
Journal of Computer Applications    2016, 36 (4): 1075-1079.   DOI: 10.11772/j.issn.1001-9081.2016.04.1075
The texture images obtained by the multiple sensors of a 3D color scanner exhibit color differences, resulting in color blocks on the surface of the 3D color model. To solve this problem, a correction method based on color transfer was proposed. First, a comprehensive quality assessment function was used to choose the best of the color texture images obtained by the multiple sensors as the standard image. Then, the mean and variance of the other texture images in each color channel were adjusted with reference to the standard image. The proposed method was applied to texture image color correction in a 3D human body color scanner. The results show that, after correcting the color differences between texture images, the color blocks on the 3D color body model are significantly reduced and the color is more balanced and natural. Subjective and objective evaluations against the classical color transfer method, an improved color transfer method and a minimum-angle-selection-based method confirm the superiority of the proposed method.
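The per-channel mean/variance adjustment described above is essentially Reinhard-style statistics matching. A minimal sketch, working directly in RGB (color-transfer methods often use a decorrelated space such as lαβ, so this is an implementation choice, not the paper's exact pipeline):

```python
import numpy as np

def match_mean_std(source, standard, eps=1e-8):
    """Shift and scale each color channel of `source` so that its mean and
    standard deviation match those of the chosen `standard` texture image,
    i.e. the mean/variance transfer step described in the abstract."""
    src = source.astype(np.float64)
    ref = standard.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):                       # per color channel
        mu_s, sd_s = src[..., c].mean(), src[..., c].std()
        mu_r, sd_r = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - mu_s) * (sd_r / (sd_s + eps)) + mu_r
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

After the transfer, each channel of the corrected texture has (up to rounding) the same first- and second-order statistics as the standard image, which is what removes the visible color seams between sensor patches.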
Denoising algorithm for random-valued impulse noise based on weighted spatial local outlier measure
YANG Hao, CHEN Leiting, QIU Hang
Journal of Computer Applications    2016, 36 (10): 2826-2831.   DOI: 10.11772/j.issn.1001-9081.2016.10.2826
In order to alleviate the problems of inaccurate noise identification and blurred restoration of image edges and details, a novel algorithm based on a weighted Spatial Local Outlier Measure (SLOM) was proposed for removing random-valued impulse noise, namely WSLOM-EPR. Based on an optimized spatial distance difference, the mean and standard deviation of the neighborhood were introduced to set up a noise detection method reflecting the local characteristics of image edges, which improved the precision of noise identification at edges. Based on the detection results, the Edge-Preserving Regularization (EPR) function was optimized to improve computational efficiency and the preservation of edges and details. Simulation results at noise levels of 40% to 60% showed that the overall noise detection performance was better than that of the compared detection algorithms, maintaining a good balance between false detections and missed detections. The Peak Signal-to-Noise Ratio (PSNR) of WSLOM-EPR was higher than that of most of the compared algorithms, and the restored images had clear, continuous edges. Experimental results show that WSLOM-EPR improves detection precision and preserves more edge and detail information.
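The neighborhood mean/standard-deviation test at the heart of the detector can be sketched as below; this simplified stand-in flags a pixel whose deviation from its local mean exceeds a multiple of the local standard deviation, and does not reproduce the paper's spatial weighting scheme:

```python
import numpy as np

def detect_impulse_noise(img, win=3, lam=2.5):
    """Flag pixels deviating from the local neighborhood mean by more than
    `lam` local standard deviations -- a simplified stand-in for the
    weighted spatial local outlier measure."""
    img = img.astype(np.float64)
    h, w = img.shape
    r = win // 2
    pad = np.pad(img, r, mode='reflect')
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + win, j:j + win]
            # Exclude the center pixel from the local statistics.
            vals = np.delete(patch.ravel(), (win * win) // 2)
            mu, sd = vals.mean(), vals.std()
            mask[i, j] = abs(img[i, j] - mu) > lam * sd + 1e-9
    return mask
```

On a flat patch, an isolated impulse is flagged while its neighbors are not, since the impulse inflates their local standard deviation enough to keep them below the threshold.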
Provably-secure two-factor authentication scheme for wireless sensor network
CHEN Lei, WEI Fushan, MA Chuangui
Journal of Computer Applications    2015, 35 (10): 2877-2882.   DOI: 10.11772/j.issn.1001-9081.2015.10.2877
With the development of Wireless Sensor Networks (WSNs), user authentication in WSNs is a critical security issue due to their unattended and hostile deployment in the field. To improve the security of user authentication, a new provably-secure two-factor authenticated key exchange scheme based on Nam's first security model was proposed. The proposed scheme is based on elliptic curve cryptography, and it achieves authentication security and user anonymity. Its security was proven under the Elliptic Curve Computational Diffie-Hellman (ECCDH) assumption in the random oracle model. Performance analysis demonstrates that, compared to Nam's schemes, the proposal is more efficient and better suited to wireless sensor network environments.
Boundary handling algorithm for weakly compressible fluids
NIE Xiao, CHEN Leiting
Journal of Computer Applications    2015, 35 (1): 206-210.   DOI: 10.11772/j.issn.1001-9081.2015.01.0206
In order to simulate interactions of fluids with solid boundaries, a boundary handling algorithm based on weakly compressible Smoothed Particle Hydrodynamics (SPH) was presented. First, a novel volume-weighted function was introduced to correct the density estimation errors in non-uniformly sampled solid boundary regions. Then, a new boundary force computation model was proposed to prevent penetration without position correction of fluid particles. Finally, an improved fluid pressure force model was proposed to enforce the weak incompressibility constraint. Experimental results show that the proposed method effectively solves the stability problems that position-correction-based boundary handling methods exhibit when weakly compressible fluids interact with non-uniformly sampled solid boundaries. In addition, only the positions of boundary particles are needed, so both the memory and the extra computation required by position correction are saved.

Speed adaptive vertical handoff algorithm based on application requirements
TAO Yang, JIANG Yanli, CHEN Leicheng
Journal of Computer Applications    2014, 34 (5): 1236-1238.   DOI: 10.11772/j.issn.1001-9081.2014.05.1236
Next Generation Network (NGN) is a converged network that integrates different radio access technologies. In such an environment, vertical handoff between different wireless access technologies becomes an important research topic. However, most vertical handoff algorithms take network properties as the only decision criteria and do not consider the actual demands of the application or the mobility of the user. To solve this problem, a speed-adaptive vertical handoff algorithm based on application requirements was proposed. It uses a speed factor and a network property matrix to compensate for the wireless link quality loss caused by mobility, and adaptively adjusts the weights of the network properties required by the application, supporting the node in making effective decisions. The algorithm realizes speed-adaptive vertical handoff that better serves the application. Simulation results show that the proposed algorithm can effectively overcome the ping-pong effect and achieves higher packet throughput than the other compared vertical handoff algorithms.
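The decision step can be caricatured as a weighted scoring of candidate networks with a speed-dependent penalty. Everything below (the linear score, the per-network penalty vector, the parameter names) is a hypothetical illustration of this class of decision function, not the paper's exact algorithm:

```python
import numpy as np

def select_network(attrs, weights, speed, speed_penalty):
    """Pick the network with the highest application-weighted score,
    minus a penalty that grows with user speed (illustrative form).

    attrs:         (n_networks, n_attributes), normalized to [0, 1],
                   larger is better (bandwidth, signal strength, ...).
    weights:       per-attribute weights reflecting application demands.
    speed_penalty: per-network penalty coefficient; e.g. larger for a
                   small-coverage WLAN that is costly to hand off at speed.
    """
    score = attrs @ weights - speed * speed_penalty
    return int(np.argmax(score)), score
```

With such a rule, a node at walking speed prefers the WLAN with better attributes, while the same node at vehicular speed flips to the wide-coverage cellular network, which is the speed-adaptive behavior the abstract describes.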

Improved player skill estimation algorithm by modeling first-move advantage
WU Lin, CHEN Lei, YUAN Meiyu, JIANG Hong
Journal of Computer Applications    2014, 34 (11): 3264-3267.   DOI: 10.11772/j.issn.1001-9081.2014.11.3264
Because traditional player skill estimation algorithms based on probabilistic graphical models neglect the first-move advantage (or home-field advantage), which affects estimation accuracy, a new method to model the first-move advantage was proposed. Based on the graphical model, nodes for the first-move advantage were introduced and added to the players' skills. Then, according to the game results, the true skills and first-move advantages of players were calculated by a Bayesian learning method. Finally, predictions for upcoming matches were made using the estimated results. Two real-world datasets were used to compare the proposed method with the traditional model that neglects the first-move advantage. The results show that the proposed method improves the average estimation accuracy noticeably.
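The paper's model is Bayesian (TrueSkill-style message passing). As a point-estimate caricature of the same idea, adding a first-move advantage to the mover's effective strength, an Elo-style update can be sketched; the 30-point advantage and K = 24 are illustrative values, not estimates from the paper:

```python
def update_elo(r_first, r_second, outcome, advantage=30.0, k=24.0):
    """Elo-style rating update with a first-move advantage added to the
    mover's effective rating. `outcome` is 1.0 if the first mover won,
    0.5 for a draw, 0.0 for a loss."""
    # Expected score of the first mover, boosted by the advantage term.
    expected = 1.0 / (1.0 + 10 ** (-((r_first + advantage) - r_second) / 400))
    delta = k * (outcome - expected)
    return r_first + delta, r_second - delta
```

Because the mover is expected to score above 0.5 even against an equally rated opponent, a win as first mover is rewarded less than a win as second mover, which is exactly the bias-correction effect of modeling the advantage.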

Codebook design algorithm for image vector quantization based on improved artificial bee colony
GUO Yanju, CHEN Lei, CHEN Guoying
Journal of Computer Applications    2013, 33 (09): 2573-2576.   DOI: 10.11772/j.issn.1001-9081.2013.09.2573
A new vector quantization image compression algorithm based on an improved artificial bee colony was proposed to improve codebook quality. In this method, the Mean Squared Error (MSE) was used as the fitness function and optimized by the improved artificial bee colony algorithm, which improved the self-organization and convergence of the algorithm while reducing the possibility of falling into local convergence. To reduce the computational cost, a fast codebook search based on the sums of vectors was introduced into the fitness function calculation. Simulation results show that the algorithm converges quickly and saves computation time, and that the codebooks it generates are of good quality and robustness.
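The sum-of-vectors rejection test mentioned above follows from the Cauchy-Schwarz inequality: for k-dimensional vectors, sum((x_i - c_i)^2) >= (sum(x_i) - sum(c_i))^2 / k, so a codeword whose component sum differs too much from the input's can be skipped without computing the full distance. A sketch of the search (the surrounding bee-colony optimization is not reproduced):

```python
import numpy as np

def nearest_codeword(x, codebook):
    """Nearest-neighbor codebook search with the sum-of-vectors rejection
    test: the cheap lower bound (sum(x) - sum(c))**2 / k lets most
    codewords be rejected before the full squared distance is computed."""
    k = len(x)
    sums = codebook.sum(axis=1)          # precomputable per codebook
    sx = x.sum()
    best, best_d = -1, np.inf
    for i, c in enumerate(codebook):
        lower = (sx - sums[i]) ** 2 / k  # cheap lower bound on the distance
        if lower >= best_d:
            continue                     # cannot beat the current best
        d = np.sum((x - c) ** 2)
        if d < best_d:
            best, best_d = i, d
    return best, best_d
```

The result is identical to a brute-force search; only the amount of arithmetic changes, which is why the trick can be folded into the fitness evaluation without altering the MSE being optimized.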
Improved Gaussian mixture model and shadow elimination method
CHEN Lei, ZHANG Rongguo, HU Jing, LIU Kun
Journal of Computer Applications    2013, 33 (05): 1394-1400.   DOI: 10.3724/SP.J.1087.2013.01394
To effectively reduce the computation of the Gaussian mixture model and improve the accuracy of shadow elimination in moving object detection, an algorithm that updates the model selectively and eliminates shadows according to brightness changes was proposed. Firstly, before updating a Gaussian distribution, its weight was compared with the proportion of pixels not belonging to the background: if the weight was larger, the distribution was not updated; otherwise, it was updated. Secondly, the range of brightness change was used as a threshold factor for shadow detection, so that the threshold could be adjusted adaptively according to brightness changes. Finally, the algorithm was compared with traditional ones in experiments on indoor and outdoor videos. The experimental results show that the time consumption of the algorithm is about one third of that of the traditional ones and the accuracy of shadow elimination is improved, confirming the efficiency of the algorithm.
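The brightness-ratio shadow test underlying this kind of method can be sketched as follows; the two fixed thresholds stand in for the adaptive, brightness-dependent thresholds that are the paper's refinement:

```python
import numpy as np

def shadow_mask(frame_v, bg_v, ratio_low=0.5, ratio_high=0.95):
    """Mark pixels as shadow when their brightness is a roughly uniform
    attenuation of the background: ratio_low <= V / V_bg <= ratio_high.
    A shadow darkens a surface without changing it completely, so a pixel
    far darker than this band is treated as a real object instead."""
    ratio = frame_v.astype(np.float64) / np.maximum(bg_v.astype(np.float64), 1.0)
    return (ratio >= ratio_low) & (ratio <= ratio_high)
```

A pixel at 70% of its background brightness is classified as shadow, while a pixel at 30% (too dark for a cast shadow) and an unchanged pixel (ratio near 1) are not.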